
    The relation between VLDL-cholesterol and risk of cardiovascular events in patients with manifest cardiovascular disease

    INTRODUCTION: Apolipoprotein B-containing lipoproteins are atherogenic. There is evidence that at low plasma low-density lipoprotein cholesterol (LDL-C) levels, residual vascular risk may be driven by triglyceride-rich lipoproteins such as very-low-density lipoproteins (VLDL), chylomicrons, and their remnants. We investigated the relationship between VLDL-cholesterol (VLDL-C) and recurrent major adverse cardiovascular events (MACE), major adverse limb events (MALE), and all-cause mortality in a cohort of patients with cardiovascular disease. METHODS: Prospective cohort study in 8057 patients with cardiovascular disease from the UCC-SMART study. The relation between calculated VLDL-C levels and the occurrence of MACE, MALE, and all-cause mortality was analyzed with Cox regression models. RESULTS: Patients' mean age was 60 ± 10 years, 74% were male, 4894 (61%) had coronary artery disease, 2445 (30%) stroke, 1425 (18%) peripheral arterial disease, and 684 (8%) an abdominal aortic aneurysm at baseline. A total of 1535 MACE, 571 MALE, and 1792 deaths were observed during a median follow-up of 8.2 years (interquartile range 4.5-12.2). VLDL-C was not associated with risk of MACE or all-cause mortality. In the highest quartile of VLDL-C, the risk of MALE was higher (HR 1.49; 95% CI 1.16-1.93) than in the lowest quartile, after adjustment for confounders including LDL-C and lipid-lowering medication. CONCLUSION: In patients with clinically manifest cardiovascular disease, plasma VLDL-C confers an increased risk of MALE, but not of MACE or all-cause mortality, independent of established risk factors including LDL-C and lipid-lowering medication.
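
    The quartile-based Cox analysis described above can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the file name, column names, and the use of TC − LDL-C − HDL-C as the "calculated" VLDL-C are all assumptions, and it relies on the open-source lifelines library.

```python
# Minimal sketch of the quartile-based Cox analysis; file and column
# names are hypothetical, and TC - LDL-C - HDL-C is only one common way
# to approximate "calculated" VLDL-C.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ucc_smart_cohort.csv")  # hypothetical extract

# Calculated VLDL-C (mmol/L): total cholesterol minus LDL-C minus HDL-C.
df["vldl_c"] = df["total_chol"] - df["ldl_c"] - df["hdl_c"]

# VLDL-C quartiles, with the lowest quartile (q1) as the reference.
df["vldl_q"] = pd.qcut(df["vldl_c"], 4, labels=["q1", "q2", "q3", "q4"])
dummies = pd.get_dummies(df["vldl_q"], prefix="vldl", drop_first=True)

confounders = ["age", "sex", "ldl_c", "lipid_lowering_med"]  # assumed set
model_df = pd.concat(
    [df[["follow_up_years", "male_event"] + confounders], dummies.astype(float)],
    axis=1,
)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="follow_up_years", event_col="male_event")
cph.print_summary()  # adjusted HRs for q2-q4 versus q1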

    Relationship between classic vascular risk factors and cumulative recurrent cardiovascular event burden in patients with clinically manifest vascular disease: Results from the UCC-SMART prospective cohort study

    Objective The aim of the current study was to assess the relationship between classic cardiovascular risk factors and the risk of not only the first recurrent atherosclerotic cardiovascular event, but also the total number of non-fatal and fatal cardiovascular events in patients with recently clinically manifest cardiovascular disease (CVD). Design Prospective cohort study. Setting Tertiary care centre. Participants 7239 patients with a recent first manifestation of CVD from the prospective UCC-SMART (Utrecht Cardiovascular Cohort-Second Manifestations of ARTerial disease) cohort study. Outcome measures Total cardiovascular events, including myocardial infarction, stroke, vascular interventions, major limb events and cardiovascular mortality. Results During a median follow-up of 8.9 years, 1412 patients had one recurrent cardiovascular event and 1290 patients had two or more, for a total of 5457 cardiovascular events during follow-up. The HRs for the first recurrent event and for cumulative event burden, estimated with Prentice-Williams-Peterson models, were 1.36 (95% CI 1.25 to 1.48) and 1.26 (95% CI 1.17 to 1.35) for smoking, 1.14 (95% CI 1.11 to 1.18) and 1.09 (95% CI 1.06 to 1.12) for non-high-density lipoprotein (HDL) cholesterol, and 1.05 (95% CI 1.03 to 1.07) and 1.04 (95% CI 1.03 to 1.06) for systolic blood pressure per 10 mm Hg. Conclusions In a cohort of patients with established CVD, systolic blood pressure, non-HDL cholesterol and current smoking are important risk factors not only for the first but also for subsequent recurrent events during follow-up. Recurrent event analysis captures the full cumulative burden of CVD in these patients.
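
    Prentice-Williams-Peterson models extend the Cox model to recurrent events by stratifying the baseline hazard on event number and clustering standard errors within patients. Below is a rough sketch of the "total time" variant using lifelines' CoxTimeVaryingFitter; the long-format data layout and all column names are assumptions, not the paper's actual setup.

```python
# Rough sketch of a Prentice-Williams-Peterson "total time" model with
# lifelines; the long-format layout and all column names are assumptions.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per patient per at-risk interval:
#   start = time of the previous event (0 for the first interval)
#   stop  = time of this event or of censoring
#   event = 1 if a cardiovascular event ended the interval
#   enum  = event number (1st, 2nd, ...), the PWP stratification variable
long_df = pd.read_csv("ucc_smart_recurrent_long.csv")  # hypothetical

ctv = CoxTimeVaryingFitter()
ctv.fit(
    long_df[["id", "start", "stop", "event", "enum",
             "smoking", "non_hdl_chol", "sbp_per10"]],
    id_col="id",       # cluster on patient for robust standard errors
    start_col="start",
    stop_col="stop",
    event_col="event",
    strata=["enum"],   # separate baseline hazard for each event number
    robust=True,
)
ctv.print_summary()  # HRs for smoking, non-HDL cholesterol, SBP per 10 mm Hg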

    Evaluating a cardiovascular disease risk management care continuum within a learning healthcare system: a prospective cohort study

    Background: Many patients now present with multimorbidity and chronic disease. Multidisciplinary management in a care continuum, integrating primary care and hospital care services, is therefore needed to ensure high-quality care. Aim: To evaluate cardiovascular risk management (CVRM) via linkage of health data sources, as an example of a multidisciplinary continuum within a learning healthcare system (LHS). Design & setting: In this prospective cohort study, data were linked from the Utrecht Cardiovascular Cohort (UCC) to the Julius General Practitioners' Network (JGPN) database. UCC offers structured CVRM at referral to the University Medical Centre (UMC) Utrecht. JGPN consists of electronic health record (EHR) data from referring GPs. Method: The cardiovascular risk factors were extracted for each patient 13 months before referral (JGPN), at UCC inclusion, and during 12 months of follow-up (JGPN). The following areas were assessed: registration of risk factors; detection of risk factor(s) requiring treatment at UCC; communication of risk factors and actionable suggestions from the specialist to the GP; and change of management during follow-up. Results: In 52% of patients, ≥1 risk factors were registered (that is, extractable from structured fields within routine care health records) before UCC. In 12%-72% of patients, risk factor(s) existed that required (change or start of) treatment at UCC inclusion. Specialist communication included the complete risk profile in 67% of letters, but lacked actionable suggestions in 86%. In 29% of patients, at least one risk factor was registered after UCC. A change in management in GP records was seen in 21%-58% of these patients. Conclusion: Evaluation of a multidisciplinary LHS is possible via linkage of health data sources. Efforts have to be made to improve registration in primary care, as well as communication of findings and actionable suggestions for follow-up, to bridge the gap in the CVRM continuum.
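
    The linkage step this study depends on can be illustrated with a short sketch: joining hospital inclusion data to GP extracts on a pseudonymised patient id and checking which risk factors were registered in the 13 months before referral. All file and column names below are assumptions for illustration.

```python
# Illustrative linkage step: join UCC inclusion data to JGPN GP extracts
# on a pseudonymised patient id and count risk factors registered in the
# 13 months before referral. All file and column names are assumptions.
import pandas as pd

ucc = pd.read_csv("ucc_inclusion.csv")       # id, inclusion_date, ...
jgpn = pd.read_csv("jgpn_measurements.csv")  # id, date, risk_factor, value

ucc["inclusion_date"] = pd.to_datetime(ucc["inclusion_date"])
jgpn["date"] = pd.to_datetime(jgpn["date"])

# Keep GP records falling in the 13-month window before UCC inclusion.
merged = jgpn.merge(ucc[["id", "inclusion_date"]], on="id")
window = merged[
    (merged["date"] < merged["inclusion_date"])
    & (merged["date"] >= merged["inclusion_date"] - pd.DateOffset(months=13))
]

# Share of patients with at least one registered risk factor pre-referral.
n_registered = window.groupby("id")["risk_factor"].nunique()
share = (n_registered >= 1).sum() / len(ucc)
print(f"Patients with a registered risk factor before UCC: {share:.0%}")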

    Predictive value of noninvasive measures of atherosclerosis for incident myocardial infarction: the Rotterdam Study.

    BACKGROUND: Several noninvasive methods are available to investigate the severity of extracoronary atherosclerotic disease. No population-based study has yet examined whether these measures differ in their predictive value for myocardial infarction (MI) or whether a given measure of atherosclerosis has predictive value independent of the other measures. METHODS AND RESULTS: At the baseline (1990-1993) examination of the Rotterdam Study, a population-based cohort study among subjects aged ≥55 years, carotid plaques and intima-media thickness (IMT) were measured by ultrasound, abdominal aortic atherosclerosis by X-ray, and lower-extremity atherosclerosis by computation of the ankle-arm index. In the present study, 6389 subjects were included; 258 cases of incident MI occurred before January 1, 2000. All 4 measures of atherosclerosis were good predictors of MI independent of traditional cardiovascular risk factors. Hazard ratios were equally high for carotid plaques (1.83 [1.27 to 2.62], severe versus no atherosclerosis), carotid IMT (1.95 [1.19 to 3.19]), and aortic atherosclerosis (1.94 [1.30 to 2.90]), and slightly lower for lower-extremity atherosclerosis (1.59 [1.05 to 2.39]), although differences were small. The hazard ratio for MI for subjects with severe atherosclerosis according to a composite atherosclerosis score was 2.77 (1.70 to 4.52) compared with subjects with no atherosclerosis. The predictive value of a given measure of atherosclerosis for MI was independent of the other atherosclerosis measures. CONCLUSIONS: Noninvasive measures of extracoronary atherosclerosis are strong predictors of MI. The relatively crude measures that directly assess plaques in the carotid artery and abdominal aorta predict MI as well as the more precisely measured carotid IMT.
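
    The ankle-arm index used for the lower-extremity measure is simply the ratio of ankle to arm systolic blood pressure, with values below roughly 0.9 conventionally taken as evidence of lower-extremity arterial disease. The sketch below illustrates that computation and a simple additive composite score; the grading and weighting are illustrative assumptions, not the Rotterdam Study's exact definitions.

```python
# Illustrative ankle-arm index and composite score; the grading scheme and
# the 0.9 cut-off usage here are simplifications, not the Rotterdam
# Study's exact definitions.
import pandas as pd

df = pd.DataFrame({
    "ankle_sbp": [110, 95, 140],            # mm Hg (assumed measurements)
    "arm_sbp": [130, 135, 128],             # mm Hg
    "carotid_plaque_grade": [0, 2, 1],      # 0 = none ... 2 = severe
    "aortic_calcification_grade": [0, 2, 1],
})

# Ankle-arm index = ankle systolic / arm systolic pressure; values below
# roughly 0.9 are conventionally taken to indicate lower-extremity disease.
df["aai"] = df["ankle_sbp"] / df["arm_sbp"]
df["low_aai"] = (df["aai"] < 0.9).astype(int)

# Simple additive composite across sites (higher = more severe).
df["composite"] = (
    df["carotid_plaque_grade"] + df["aortic_calcification_grade"] + df["low_aai"]
)
print(df[["aai", "low_aai", "composite"]])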

    Automatic Prediction of Recurrence of Major Cardiovascular Events: A Text Mining Study Using Chest X-Ray Reports

    Background and Objective. Electronic health records (EHRs) contain free-text information on symptoms, diagnosis, treatment, and prognosis of diseases. However, this potential goldmine of health information cannot be easily accessed and used unless proper text mining techniques are applied. The aim of this project was to develop and evaluate a text mining pipeline in a multimodal learning architecture to demonstrate the value of medical text classification in chest radiograph reports for cardiovascular risk prediction. We sought to assess the integration of various text representation approaches and structured clinical data with state-of-the-art deep learning methods in the process of medical text mining. Methods. We used EHR data of patients included in the Second Manifestations of ARTerial disease (SMART) study. We propose a deep learning-based multimodal architecture for our text mining pipeline that integrates neural text representation with preprocessed clinical predictors for the prediction of recurrence of major cardiovascular events in cardiovascular patients. Text preprocessing, including cleaning and stemming, was first applied to filter out unwanted text from the X-ray radiology reports. Thereafter, text representation methods were used to numerically represent the unstructured radiology reports as vectors. Subsequently, these text representations were added to prediction models to assess their clinical relevance. In this step, we applied logistic regression, support vector machine (SVM), multilayer perceptron neural network, convolutional neural network, long short-term memory (LSTM), and bidirectional LSTM deep neural network (BiLSTM) models. Results. We performed various experiments to evaluate the added value of the text in the prediction of major cardiovascular events. The two main scenarios were the integration of radiology reports (1) with classical clinical predictors and (2) with only age and sex in the case of unavailable clinical predictors. In total, data of 5603 patients were used with 5-fold cross-validation to train the models. In the first scenario, the multimodal BiLSTM (MI-BiLSTM) model achieved an area under the curve (AUC) of 84.7%, a misclassification rate of 14.3%, and an F1 score of 83.8%. In this scenario, the SVM model, trained on clinical variables and a bag-of-words representation, achieved the lowest misclassification rate of 12.2%. In the case of unavailable clinical predictors, the MI-BiLSTM model trained on radiology reports and demographic (age and sex) variables reached an AUC, F1 score, and misclassification rate of 74.5%, 70.8%, and 20.4%, respectively. Conclusions. Using the case study of routine care chest X-ray radiology reports, we demonstrated the clinical relevance of integrating text features and classical predictors in our text mining pipeline for cardiovascular risk prediction. The MI-BiLSTM model with word embedding representation showed desirable performance when trained on text data integrated with the clinical variables from the SMART study. Our results mined from chest X-ray reports showed that models using text data in addition to laboratory values outperform those using only known clinical predictors.
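
    The simplest variant reported above (bag-of-words text features combined with clinical predictors and an SVM, evaluated with 5-fold cross-validation) can be sketched with scikit-learn as follows. File and column names are assumptions; the authors' full pipeline additionally covers cleaning, stemming, embeddings, and the (Bi)LSTM models.

```python
# Sketch of the SVM-on-bag-of-words scenario: TF-IDF features from the
# radiology reports concatenated with clinical predictors, evaluated with
# 5-fold cross-validation. File and column names are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("smart_xray_reports.csv")      # hypothetical extract
clinical_cols = ["age", "sex", "sbp", "ldl_c"]  # assumed numeric predictors
y = df["recurrent_mace"]                        # 1 = recurrent event

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000, stop_words="english"),
     "report_text"),
    ("clinical", StandardScaler(), clinical_cols),
])

model = Pipeline([
    ("features", features),
    ("svm", SVC(kernel="linear")),  # decision_function supports AUC scoring
])

aucs = cross_val_score(model, df, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")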

    Missing data is poorly handled and reported in prediction model studies using machine learning: a literature review

    OBJECTIVES: Missing data are a common problem during the development, evaluation, and implementation of prediction models. Although machine learning (ML) methods are often said to be capable of circumventing missing data, it is unclear how these methods are used in medical research. We aimed to find out whether, and how well, prediction model studies using machine learning report on their handling of missing data. STUDY DESIGN AND SETTING: We systematically searched the literature for papers published between 2018 and 2019 reporting primary studies that developed and/or validated clinical prediction models using any supervised ML methodology across medical fields. From the retrieved studies, we extracted information about the amount and nature (e.g., missing completely at random, potential reasons for missingness) of missing data and the way it was handled. RESULTS: We identified 152 machine learning-based clinical prediction model studies. A substantial number of these 152 papers did not report anything on missing data (n = 56/152). A majority (n = 96/152) reported details on the handling of missing data (e.g., methods used), though many of these (n = 46/96) did not report the amount of missingness in the data. In these 96 papers, the authors only sometimes reported possible reasons for missingness (n = 7/96) and information about missing data mechanisms (n = 8/96). The most common approach for handling missing data was deletion (n = 65/96), mostly via complete-case analysis (CCA) (n = 43/96). Very few studies used multiple imputation (n = 8/96) or built-in mechanisms such as surrogate splits (n = 7/96) that directly address missing data during the development, validation, or implementation of the prediction model. CONCLUSION: Though missing values are highly common in any type of medical research, and certainly in research based on routine healthcare data, a majority of prediction model studies using machine learning do not report sufficient information on the presence and handling of missing data. Strategies in which patient data are simply omitted are unfortunately the most commonly used, even though this approach is generally advised against and is well known to cause bias and loss of analytical power, both in prediction model development and in the estimates of predictive accuracy. Prediction model researchers should be much more aware of alternative methodologies to address missing data.
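
    The contrast the review draws between complete-case analysis and multiple imputation can be made concrete with a short sketch using scikit-learn's (experimental) IterativeImputer on synthetic data; the data, columns, and missingness rate below are invented for illustration.

```python
# Sketch contrasting complete-case analysis with multiple imputation,
# using scikit-learn's experimental IterativeImputer on synthetic data.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["age", "sbp", "ldl_c"])
df.loc[rng.random(200) < 0.3, "ldl_c"] = np.nan  # ~30% missing

# Complete-case analysis: drop any row with a missing value. This shrinks
# the sample and can bias estimates when data are not missing completely
# at random (MCAR).
cca = df.dropna()
print(f"CCA keeps {len(cca)}/{len(df)} rows")

# Multiple imputation: create several plausible completed datasets, fit
# the model on each, and pool the results (Rubin's rules). Here we only
# build the completed datasets.
imputed = [
    pd.DataFrame(
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(df),
        columns=df.columns,
    )
    for m in range(5)
]
print(f"Built {len(imputed)} imputed datasets for pooled analysis")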